Results 1 - 20 of 35
1.
Indian J Dermatol Venereol Leprol ; 2023 Aug; 89(4): 549-552
Article | IMSEAR | ID: sea-223157

ABSTRACT

Artificial intelligence (AI), a major frontier in medical research, can potentially lead to a paradigm shift in clinical practice. Convolutional neural networks, a type of artificial intelligence system, point to the possible utility of deep learning in dermatopathology. Though pathology has traditionally been restricted to microscopes and glass slides, recent advances in digital pathological imaging have made it a promising branch for the implementation of artificial intelligence. The current application of artificial intelligence in dermatopathology is to complement diagnosis, and it requires a well-trained dermatopathologist's guidance for the design and development of deep learning algorithms. Here we review recent advances of artificial intelligence in dermatopathology, its applications in disease diagnosis and in research, along with its limitations and future potential.

2.
Chinese Journal of Radiation Oncology ; (6): 319-324, 2023.
Article in Chinese | WPRIM | ID: wpr-993194

ABSTRACT

Objective: To develop an automatic segmentation method for organs at risk (OAR) in head and neck carcinoma radiotherapy images based on multi-scale fusion and an attention mechanism. Methods: We proposed a new OAR segmentation method for head and neck medical images based on the U-Net convolutional neural network. A spatial and channel squeeze-excitation (csSE) attention block was combined with the U-Net to enhance its feature expression ability, and a multi-scale block was added in the U-Net encoding stage to supplement feature information. Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) were used as evaluation criteria. Results: Segmentation of 22 head and neck OAR was performed on the Medical Image Computing and Computer Assisted Intervention (MICCAI) StructSeg2019 dataset. The proposed method improved average segmentation accuracy by 3%-6% compared with existing methods; the average DSC for the 22 OAR was 78.90% and the average 95% HD was 6.23 mm. Conclusion: Automatic segmentation of head and neck OAR from CT using multi-scale fusion and an attention mechanism achieves high segmentation accuracy, which is promising for enhancing the accuracy and efficiency of radiotherapy in clinical practice.
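The DSC reported above measures the overlap between a predicted mask and a reference mask. As a plain-Python illustration of the formula (not the authors' implementation):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks, given as flat
    sequences of 0/1: 2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

# Two 6-pixel masks agreeing on 2 of their 3 foreground pixels each:
print(round(dice_coefficient([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]), 3))  # 0.667
```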

3.
Clinical Medicine of China ; (12): 201-205, 2023.
Article in Chinese | WPRIM | ID: wpr-992489

ABSTRACT

In recent years, artificial intelligence technology has made progress in almost all fields, including medicine. At present, AI-assisted upper gastrointestinal endoscopy has been introduced into clinical practice as a clinical decision support tool. Combined with the expertise of endoscopy experts, artificial intelligence is expected to be an effective tool for improving diagnostic ability, especially for endoscopy beginners and inexperienced endoscopists, and its emergence is of great significance for improving endoscopists' working efficiency and diagnostic ability. However, the application of artificial intelligence in upper gastrointestinal endoscopy is still in the exploratory stage and has not been widely applied in clinical practice.

4.
Chinese Journal of Experimental Traditional Medical Formulae ; (24): 133-140, 2023.
Article in Chinese | WPRIM | ID: wpr-975165

ABSTRACT

Chinese herbal pieces are an important component of the traditional Chinese medicine (TCM) system, and identifying their quality and grade can promote the development and utilization of Chinese herbal pieces. Using deep learning for intelligent identification of Chinese herbal pieces saves time, effort, and cost, while reasonably avoiding the constraints of human subjectivity, providing a guarantee for efficient identification. In this study, a dataset containing 108 kinds of Chinese herbal pieces (14 058 images) was constructed, and the basic YOLOv4 algorithm was employed to identify the 108 kinds of pieces in the database. The mean average precision (mAP) of the basic YOLOv4 model reached 85.3%. In addition, a receptive field block was introduced into the neck network of the YOLOv4 algorithm, and the improved YOLOv4 algorithm was used to identify the pieces. The mAP of the improved YOLOv4 model reached 88.7%; the average precision exceeded 80% for 80 kinds of decoction pieces and 90% for 48 kinds. These results indicate that adding the receptive field module helps, to some extent, in identifying Chinese herbal pieces of different sizes and small volumes. Finally, the per-class average precision of the improved YOLOv4 model was further analyzed. In-depth analysis of the original images of pieces with low average precision clarified that the quantity and quality of the original images are key to intelligent object detection. The improved YOLOv4 model constructed in this study can be used for rapid identification of Chinese herbal pieces, and also provides reference guidance for manual authentication of Chinese herbal decoction pieces.
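In detection benchmarks like the YOLOv4 evaluation above, mAP typically rests on matching predicted boxes to ground truth by intersection over union (IoU). A minimal sketch of the underlying IoU computation (an illustration, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle corners
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7: overlap area 1, union area 7
```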

5.
International Eye Science ; (12): 1007-1011, 2023.
Article in Chinese | WPRIM | ID: wpr-973795

ABSTRACT

In recent years, ophthalmology, as one of the medical fields highly dependent on auxiliary imaging, has been at the forefront of the application of deep learning algorithms. The morphological changes of the choroid are closely related to the occurrence, development, treatment, and prognosis of fundus diseases. The rapid development of optical coherence tomography has greatly promoted the accurate analysis of choroidal morphology and structure. Choroidal segmentation and related analysis are crucial for determining the pathogenesis and treatment strategies of eye diseases. However, choroidal segmentation currently relies mainly on tedious, time-consuming manual segmentation with low reproducibility. To overcome these difficulties, deep learning methods for choroidal segmentation have been developed in recent years, greatly improving its accuracy and efficiency. The purpose of this paper is to review the features of choroidal thickness in different eye diseases, explore the latest applications and advantages of deep learning models in measuring choroidal thickness, and discuss the challenges these models face.

6.
International Eye Science ; (12): 299-304, 2023.
Article in Chinese | WPRIM | ID: wpr-960955

ABSTRACT

AIM: To establish an intelligent diagnostic model of keratoconus for small-diameter corneas by mining and analyzing patients' clinical data. METHODS: Diagnostic study. A total of 830 patients (830 eyes) were collected, including 338 males (338 eyes) and 492 females (492 eyes), aged 14-36 (mean 23.19±5.71) years. Among them, 731 patients (731 eyes) had undergone corneal refractive surgery at Chongqing Nanping Aier Eye Hospital from January 2020 to March 2022, and 99 patients had diagnosed keratoconus from January 2015 to March 2022. A corneal diameter ≤11.1 mm was measured by Pentacam in all patients. Two cornea specialists classified the patients' data into normal cornea, suspect keratoconus, and keratoconus groups based on the Belin/Ambrósio enhanced ectasia display (BAD) system in Pentacam. The data of 665 patients were randomly selected as the training set and the other 165 patients as the validation set by computer random sampling. Seven parametric corneal features were extracted by convolutional neural networks (CNN), and models were built with Residual Network (ResNet), Vision Transformer (ViT), and CNN+Transformer, respectively. Diagnostic accuracy was verified with cross-entropy loss and cross-validation, and sensitivity and specificity were evaluated using the receiver operating characteristic (ROC) curve. RESULTS: The accuracy of ResNet, ViT, and CNN+Transformer for distinguishing normal cornea from suspect keratoconus was 85.57%, 86.11%, and 86.54%, respectively, with areas under the ROC curve (AUC) of 0.823, 0.830, and 0.842. The accuracy of the models for distinguishing suspect keratoconus from keratoconus was 97.22%, 95.83%, and 98.61%, respectively, with AUC of 0.951, 0.939, and 0.988. CONCLUSION: For corneas ≤11.1 mm in diameter, the CNN+Transformer model classifies keratoconus with high accuracy, providing real and effective guidance for early screening.
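The AUC values above summarize the ROC curve; AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which can be computed directly from scores and labels (an illustrative sketch, not the study's code):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Wilcoxon) interpretation:
    the fraction of positive-negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))  # 0.75: 3 of 4 pairs ordered correctly
```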

7.
Cancer Research on Prevention and Treatment ; (12): 512-517, 2023.
Article in Chinese | WPRIM | ID: wpr-986224

ABSTRACT

Objective: To understand research hotspots and trends of convolutional neural networks in oncology imaging diagnosis by analyzing the characteristics of literature published at home and abroad over the past decade. Methods: The SCI-E database was used as the data source to retrieve literature on convolutional neural networks in oncology imaging diagnosis published from 2012 to 2022. The distribution of countries, institutions, journals, co-cited authors, and keywords was analyzed with CiteSpace software. Results: A total of 1088 papers were included, mostly from China, the United States, and India. Sun Yat-sen University was the institution with the most publications (39 papers), and Radiology Nuclear Medicine Medical Imaging was the journal with the most publications. A total of 25 high-frequency keywords and 15 burst keywords were obtained, forming 12 author co-citation clusters (such as image segmentation and lung nodule) and 11 keyword clusters (such as automatic segmentation and breast cancer). Conclusion: Current research on convolutional neural networks for oncology imaging diagnosis focuses on tumor segmentation, lung-nodule recognition, assisted diagnosis of breast cancer, and other high-frequency topics.

8.
Journal of Biomedical Engineering ; (6): 217-225, 2023.
Article in Chinese | WPRIM | ID: wpr-981532

ABSTRACT

Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disease. Neuroimaging based on magnetic resonance imaging (MRI) is one of the most intuitive and reliable methods to perform AD screening and diagnosis. Clinical head MRI detection generates multimodal image data, and to solve the problem of multimodal MRI processing and information fusion, this paper proposes a structural and functional MRI feature extraction and fusion method based on generalized convolutional neural networks (gCNN). The method includes a three-dimensional residual U-shaped network based on hybrid attention mechanism (3D HA-ResUNet) for feature representation and classification for structural MRI, and a U-shaped graph convolutional neural network (U-GCN) for node feature representation and classification of brain functional networks for functional MRI. Based on the fusion of the two types of image features, the optimal feature subset is selected based on discrete binary particle swarm optimization, and the prediction results are output by a machine learning classifier. The validation results of multimodal dataset from the AD Neuroimaging Initiative (ADNI) open-source database show that the proposed models have superior performance in their respective data domains. The gCNN framework combines the advantages of these two models and further improves the performance of the methods using single-modal MRI, improving the classification accuracy and sensitivity by 5.56% and 11.11%, respectively. In conclusion, the gCNN-based multimodal MRI classification method proposed in this paper can provide a technical basis for the auxiliary diagnosis of Alzheimer's disease.


Subject(s)
Humans , Alzheimer Disease/diagnostic imaging , Neurodegenerative Diseases , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods , Cognitive Dysfunction/diagnosis
9.
Chinese Journal of Medical Instrumentation ; (6): 38-42, 2023.
Article in Chinese | WPRIM | ID: wpr-971300

ABSTRACT

Accurate segmentation of retinal blood vessels is of great significance for diagnosing, preventing, and detecting eye diseases. In recent years, the U-Net network and its many variants have reached an advanced level in medical image segmentation. Most of these networks use simple max pooling to down-sample intermediate feature maps, which easily loses part of the information. This study therefore proposes a simple and effective new down-sampling method, Pixel Fusion pooling (PF-pooling), which fuses the information of adjacent pixels well. The proposed method is a lightweight general module that can be effectively integrated into various convolution-based network architectures. Experimental results on the DRIVE and STARE datasets show that, on the STARE dataset, the F1-score of a U-Net model using PF-pooling improved by 1.98%, accuracy by 0.2%, and sensitivity by 3.88%. The generalizability of the module was verified by swapping in different algorithm models: PF-pooling also improved the performance of Dense-UNet and Res-UNet, showing good universality.
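The abstract does not spell out PF-pooling's exact operation. One plausible reading, keeping all four pixels of each 2x2 window as channels instead of discarding three of them as max pooling does, can be sketched as follows (a hypothetical illustration, not the authors' implementation):

```python
def pixel_fusion_downsample(img):
    """Halve spatial resolution by packing each 2x2 neighborhood into a
    4-tuple of 'channels' rather than keeping only the maximum; a learned
    1x1 convolution could then fuse the four values (assumed fusion step)."""
    out = []
    for i in range(0, len(img) - 1, 2):
        row = []
        for j in range(0, len(img[0]) - 1, 2):
            row.append((img[i][j], img[i][j + 1],
                        img[i + 1][j], img[i + 1][j + 1]))
        out.append(row)
    return out

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(pixel_fusion_downsample(grid)[0][0])  # (1, 2, 5, 6): no pixel is dropped
```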


Subject(s)
Algorithms , Retinal Vessels , Image Processing, Computer-Assisted
10.
Indian J Pathol Microbiol ; 2022 May; 65(1): 226-229
Article | IMSEAR | ID: sea-223284

ABSTRACT

Machine learning and artificial intelligence (AI) have become a part of our daily routine. There are very few of us who are not influenced by this technology. There are a lot of misconceptions about the scope, utility, and fallacies of AI. Digital neuropathology is an evolving area of research. The importance of digital image processing stems from the rapid gains in computer vision and image processing that have happened in the past decade thanks to advancements in deep learning (DL). The article attempts to present to the audience a simple presentation of the technology and attempts to provide a context-based understanding of the DL process for image processing. Also highlighted are current challenges and the roadblocks in adopting the technology in routine neuropathology.

11.
Journal of Clinical Hepatology ; (12): 26-29, 2022.
Article in Chinese | WPRIM | ID: wpr-913152

ABSTRACT

In the era of medical big data, artificial intelligence is increasingly widely used in medicine. Efficient management and information mining of massive medical data can obtain useful information on disease development, progression, survival, and prognosis. In recent years, some achievements have been made in the application of artificial intelligence in primary liver cancer. This article elaborates on the current status and prospects of its application in the diagnosis and treatment of liver cancer.

12.
Chinese Journal of Radiation Oncology ; (6): 266-271, 2022.
Article in Chinese | WPRIM | ID: wpr-932665

ABSTRACT

Objective: A hybrid attention U-Net (HA-U-Net) neural network was designed based on U-Net for automatic delineation of the craniospinal clinical target volume (CTV), and its segmentation results were compared with those of the U-Net automatic segmentation model. Methods: The data of 110 craniospinal irradiation patients were reviewed; 80 cases were selected for the training set, 10 for the validation set, and 20 for the test set. HA-U-Net took U-Net as the basic network architecture: a dual attention module was added at the network input, and attention gate modules were combined into the skip connections to establish the automatic craniospinal delineation model. Evaluation parameters included the Dice similarity coefficient (DSC), Hausdorff distance (HD), and precision. Results: The DSC, HD, and precision of the HA-U-Net network were 0.901±0.041, 2.77±0.29 mm, and 0.903±0.038, respectively, all better than those of U-Net (all P<0.05). Conclusion: The results show that the HA-U-Net convolutional neural network can effectively improve the accuracy of automatic segmentation of the craniospinal CTV, and help doctors improve work efficiency and the consistency of CTV delineation.
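The Hausdorff distance used above measures the worst-case gap between a delineated contour and the reference; many studies cap it at the 95th percentile (HD95) to damp outlier points. A brute-force sketch over point sets follows (percentile conventions vary between toolkits):

```python
import math

def hd95(a, b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    def nearest_dists(src, dst):
        # For each point in src, the distance to its nearest neighbour in dst
        return sorted(min(math.dist(p, q) for q in dst) for p in src)

    def pct95(ds):
        # Nearest-rank 95th percentile of an ascending list
        return ds[min(len(ds) - 1, max(0, math.ceil(0.95 * len(ds)) - 1))]

    return max(pct95(nearest_dists(a, b)), pct95(nearest_dists(b, a)))

print(hd95([(0, 0)], [(3, 4)]))  # 5.0 (a single 3-4-5 right-triangle distance)
```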

13.
Journal of Biomedical Engineering ; (6): 1089-1096, 2022.
Article in Chinese | WPRIM | ID: wpr-970646

ABSTRACT

Aiming at the problems that the unbalanced class distribution of sleep electroencephalogram (EEG) data and the poor comfort of polysomnography collection reduce a model's classification ability, this paper proposed a sleep state recognition method for single-channel EEG signals (WKCNN-LSTM) based on a one-dimensional wide-kernel convolutional neural network (WKCNN) and long short-term memory (LSTM) networks. Firstly, wavelet denoising and the synthetic minority over-sampling technique-Tomek link (SMOTE-Tomek) algorithm were used to preprocess the original sleep EEG signals. Secondly, the one-dimensional sleep EEG signals were used as model input, and the WKCNN extracted frequency-domain features and suppressed high-frequency noise. Then, an LSTM layer learned the time-domain features. Finally, a normalized exponential (softmax) function on the fully connected layer realized sleep state classification. Experimental results showed that the classification accuracy of the one-dimensional WKCNN-LSTM model was 91.80%, better than that of similar studies in recent years, and the model had good generalization ability. This study improved the classification accuracy of single-channel sleep EEG signals, which can be easily utilized in portable sleep monitoring devices.
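The wide first-layer kernel in a WKCNN acts like a learned low-pass/band filter over the raw EEG. The underlying 1-D "valid" convolution can be sketched as follows (illustration only, with a hand-fixed averaging kernel standing in for learned weights):

```python
def conv1d(signal, kernel, stride=1):
    """'Valid' 1-D cross-correlation: slide the kernel across the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

# A wide averaging kernel flattens a fast-alternating (high-frequency) component:
noisy = [0, 4, 0, 4, 0, 4, 0, 4]
print(conv1d(noisy, [0.25] * 4))  # [2.0, 2.0, 2.0, 2.0, 2.0]
```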


Subject(s)
Memory, Short-Term , Neural Networks, Computer , Sleep , Electroencephalography/methods , Algorithms
14.
International Eye Science ; (12): 1016-1019, 2022.
Article in Chinese | WPRIM | ID: wpr-924225

ABSTRACT

AIM: To study the precise segmentation of pterygium lesions using convolutional neural networks from artificial intelligence. METHODS: A Phase-fusion PSPNet network structure for the segmentation of pterygium lesions is proposed based on the PSPNet model structure. In our network, an up-sampling module is connected behind the pyramid pooling module, increasing the sampling gradually in phases; this reduces information loss and suits segmentation tasks with fuzzy edges. Experiments were conducted on a dataset provided by the Affiliated Eye Hospital of Nanjing Medical University comprising 517 ocular surface photographic images of pterygium, divided into a training set (330 images), a validation set (37 images), and a test set (150 images); the training and validation images were used for training, and the test images only for testing. Results of intelligent segmentation were compared with expert annotation of pterygium lesions. RESULTS: The Phase-fusion PSPNet achieved a mean intersection over union (MIoU) of 86.31% and a mean average precision (MPA) of 91.91%; the pterygium intersection over union (IoU) and average precision (PA) were 77.64% and 86.10%, respectively. CONCLUSION: Convolutional neural networks can segment pterygium lesions with high precision, which helps provide an important reference for doctors' further diagnosis and surgical recommendations, and can also visualize intelligent pterygium diagnosis.

16.
rev. udca actual. divulg. cient ; 24(2): e1917, jul.-dic. 2021. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1361222

ABSTRACT

The presence of late blight in potato crops directly affects plant growth and tuber development; therefore, early detection of the disease is important. Currently, the application of convolutional neural networks is an opportunity oriented to the identification of patterns in precision agriculture, including the study of late blight in potato crops. This study describes a deep learning model capable of recognizing late blight in potato crops by means of leaf image classification. The PlantVillage augmented dataset was used for training. The proposed model was evaluated with performance metrics such as precision, sensitivity (recall), F1 score, and accuracy, to verify its effectiveness in identifying and classifying late blight, and was compared in performance with architectures such as AlexNet, ZFNet, VGG16, and VGG19. Experimental results on the selected dataset show that the proposed model achieves an accuracy of 90% and an F1 score of 91%. It is therefore concluded that the proposed model is a useful tool for farmers in identifying late blight, and is scalable to mobile platforms due to the number of parameters that comprise it.

17.
The Philippine Journal of Nuclear Medicine ; : 46-53, 2021.
Article in English | WPRIM | ID: wpr-976345

ABSTRACT

Background: Numerous applications of artificial intelligence have been applied in radiological imaging, ranging from computer-aided diagnosis based on machine learning to deep learning using convolutional neural networks. One of the nuclear medicine imaging tests commonly performed today is the bone scan, yet the use of deep learning through convolutional neural networks on bone scintigrams has not been fully explored, and very few studies have been published on their diagnostic capability in assessing osseous metastasis. Objective: To assess the classification performance of pre-trained convolutional neural networks in the diagnosis of bone metastasis from whole-body bone scintigrams of a local institutional dataset. Methods: Bone scintigrams from all types of cancer were retrospectively reviewed for the period 2019-2020 at the University of Perpetual Help Medical Center in Las Pinas City, Metro Manila. The study was approved by the Institutional Ethical Review Board and Technical Review Board of the medical center. Bone scan studies had to be mainly for metastasis screening. Pre-processing consisting of image normalization, image augmentation, data shuffling, and a train-test split (30% for testing; the remaining 70% split 85% for training and 15% for validation) was applied to the image dataset. Three pre-trained architectures (ResNet50, VGG19, DenseNet121) were applied to the processed dataset, and performance metrics such as accuracy, recall (sensitivity), precision (positive predictive value), and F1-score were obtained. Results: A total of 570 bone scan images of 220 x 646 pixels in .tif format were included, 40% classified as with bone metastasis and 60% as without. DenseNet121 yielded the highest performance metrics, with 83% accuracy, 76% recall, 86% precision, and 81% F1-score. ResNet50 and VGG19 performed similarly to each other across all metrics but with generally lower predictive capability than DenseNet121. Conclusion: A bone metastasis classification study using three pre-trained convolutional neural networks was performed on a local medical center bone scan dataset via transfer learning. DenseNet121 generated the highest performance metrics (83% accuracy, 76% recall, 86% precision, 81% F1-score). These simulation experiments generated promising outcomes and could potentially lead to deployment in the clinical practice of nuclear medicine physicians; deep learning through convolutional neural networks has the potential to improve their diagnostic capability when assessing metastasis on bone scans.


Subject(s)
Deep Learning , Machine Learning
18.
Journal of Zhejiang University. Science. B ; (12): 462-475, 2021.
Article in English | WPRIM | ID: wpr-880751

ABSTRACT

To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). In order to combine the low-level features and high-level features, we added densely connected blocks in the network structure design so that the low-level features will not be missed as the network layer increases during the learning process. Further, in order to resolve the problems of the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modal image and the T2-weighted (T2) modal image to enhance the edema section. For the loss function of network training, we improved the cross-entropy loss function to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training; 0.82, 0.85, and 0.83 on the BraTS2018 validation; and 0.81, 0.78, and 0.83 on the BraTS2013 testing in terms of whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieved promising accuracy and fast processing, demonstrating good potential for clinical medicine.
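The study above modifies the cross-entropy loss to curb over-fitting; the plain per-sample softmax cross-entropy it starts from can be sketched as follows (the paper's specific improvement is not given in the abstract):

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for one sample: -log softmax(logits)[target],
    computed with the max-subtraction trick for numerical stability."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return -math.log(exps[target] / total)

print(cross_entropy([0.0, 0.0], 0))  # ln(2) ~= 0.6931: a uniform 2-class guess
```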

19.
Chinese Journal of Behavioral Medicine and Brain Science ; (12): 955-960, 2021.
Article in Chinese | WPRIM | ID: wpr-909549

ABSTRACT

In psychiatry, observation of the patients is often an important basis for making a diagnosis during clinical practice. However, changes in emotional facial expressions are often subtle and difficult to detect. For this reason, automated facial expression recognition can be used to assist in identifying mental disorders. Facial expression is one of the important ways of emotional expression, and strong similarities of basic human facial expression are not affected by cultural background or congenital blindness. With the development of computer science, facial expression recognition methods are also constantly improving. Among them, deep-learning-based facial expression recognition approaches, with their powerful information processing capabilities, highly reduce the dependence on face-physics-based models and other pre-processing techniques by using trainable feature extraction models to automatically learn representations from images and videos. This article focuses on the progress of facial expression recognition system in the diagnosis and treatment of schizophrenia, depression, borderline personality disorder, autism spectrum disorder and other diseases. This article also explores the application of facial expression recognition technology in the field of psychiatry and remote psychology intervention.

20.
Article | IMSEAR | ID: sea-210224

ABSTRACT

A brain tumor is a mass of abnormal cells in the brain. Brain tumors can be benign or malignant. Conventional diagnosis of a brain tumor by a radiologist is done by examining a set of images produced by magnetic resonance imaging (MRI). Many computer-aided detection (CAD) systems have been developed to help the radiologist correctly classify the MRI image. Convolutional neural networks (CNNs) have been widely used in the classification of medical images. This paper presents a novel CAD technique for the classification of brain tumors in MRI images. The proposed system extracts features from brain MRI images by utilizing the strong energy-compactness property exhibited by the discrete wavelet transform (DWT). The wavelet features are then fed to a CNN to classify the input MRI image. Experimental results indicate that the proposed approach outperforms other commonly used methods, giving an overall accuracy of 98.5%.
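The DWT's "energy compactness" means most of the signal's energy concentrates in a few low-frequency coefficients. One level of the simplest (Haar) wavelet transform illustrates this (a 1-D sketch, not the paper's 2-D pipeline):

```python
import math

def haar_dwt_1d(x):
    """One level of the orthonormal Haar DWT: per sample pair, a low-pass
    average (approximation) and a high-pass difference (detail) coefficient."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

a, d = haar_dwt_1d([4.0, 4.0, 2.0, 2.0])
print(d)  # [0.0, 0.0]: a locally constant signal packs all energy into `a`
```

Because the transform is orthonormal, total energy (sum of squares) is preserved while being concentrated into the approximation band, which is what makes the coefficients compact features for a classifier.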
